
21 December 2013

Daniel Kahn Gillmor: Kevin M. Igoe should step down from CFRG Co-chair

I've said recently that pervasive surveillance is wrong. I don't think anyone from the NSA should have a leadership position in the development or deployment of Internet communications, because their interests are at odds with the interests of the rest of the Internet. But someone at the NSA is in exactly such a position. They ought to step down.

Here's the background: The Internet Research Task Force (IRTF) is a body tasked with research into underlying concepts, themes, and technologies related to the Internet as a whole. It acts as a research organization that cooperates with and complements the engineering and standards-setting activities of the Internet Engineering Task Force (IETF). The IRTF is divided into issue-specific research groups, each of which has a Chair or Co-Chairs who have "wide discretion in the conduct of Research Group business", and are tasked with organizing the research and discussion, ensuring that the group makes progress on the relevant issues, and communicating the general sense of the results back to the rest of the IRTF and the IETF.

One of the IRTF's research groups specializes in cryptography: the Crypto Forum Research Group (CFRG). There are two current chairs of the CFRG: David McGrew <mcgrew@cisco.com> and Kevin M. Igoe <kmigoe@nsa.gov>. As you can see from his e-mail address, Kevin M. Igoe is affiliated with the National Security Agency (NSA). The NSA itself actively tries to weaken cryptography on the Internet so that it can improve its surveillance, and one of the ways it tries to do so is to "influence policies, standards, and specifications".

On the CFRG list yesterday, Trevor Perrin requested the removal of Kevin M. Igoe from his position as Co-chair of the CFRG. Trevor's specific arguments rest heavily on the technical merits of a proposed cryptographic mechanism called Dragonfly key exchange, but I think the focus on Dragonfly itself is the least of the concerns for the IRTF. I've seconded Trevor's proposal, and asked Kevin directly to step down and to provide us with information about any attempts by the NSA to interfere with or subvert recommendations coming from these standards bodies. Below is my letter in full:
From: Daniel Kahn Gillmor <dkg@fifthhorseman.net>
To: cfrg@ietf.org, Kevin M. Igoe <kmigoe@nsa.gov>
Date: Sat, 21 Dec 2013 16:29:13 -0500
Subject: Re: [Cfrg] Requesting removal of CFRG co-chair
On 12/20/2013 11:01 AM, Trevor Perrin wrote:
> I'd like to request the removal of Kevin Igoe from CFRG co-chair.
Regardless of the conclusions that anyone comes to about Dragonfly
itself, I agree with Trevor that Kevin M. Igoe, as an employee of the
NSA, should not remain in the role of CFRG co-chair.
While the NSA clearly has a wealth of cryptographic knowledge and
experience that would be useful for the CFRG, the NSA is apparently
engaged in a series of attempts to weaken cryptographic standards and
tools in ways that would facilitate pervasive surveillance of
communication on the Internet.
The IETF's public position in favor of privacy and security rightly
identifies pervasive surveillance on the Internet as a serious problem:
https://www.ietf.org/media/2013-11-07-internet-privacy-and-security.html
The documents Trevor points to (and others from similar stories)
indicate that the NSA is an organization at odds with the goals of the IETF.
While I want the IETF to continue welcoming technical insight and
discussion from everyone, I do not think it is appropriate for anyone
from the NSA to be in a position of coordination or leadership.
----
Kevin, the responsible action for anyone in your position is to
acknowledge the conflict of interest, and step down promptly from the
position of Co-Chair of the CFRG.
If you happen to also subscribe to the broad consensus described in the
IETF's recent announcement -- that is, if you care about privacy and
security on the Internet -- then you should also reveal any NSA activity
you know about that attempts to subvert or weaken the cryptographic
underpinnings of IETF protocols.
Regards,
	--dkg
I'm aware that an abdication by Kevin (or his removal by the IETF chair) would probably not end the NSA's attempts to subvert standards bodies or weaken encryption. They could continue to do so by subterfuge, for example, or by private influence on other public members. We may not be able to stop them from doing this in secret, and the knowledge that they may do so seems likely to cast a pall of suspicion over any IETF and IRTF proceedings in the future. This social damage is serious and troubling, and it marks yet another cost to the NSA's reckless institutional disregard for civil liberties and free communication.

But even if we cannot rule out private NSA influence over standards bodies and discussion, we can certainly explicitly reject any public influence over these critical communications standards by members of an institution so at odds with the core principles of a free society. Kevin M. Igoe, please step down from the CFRG Co-chair position.

And to anyone (including Kevin) who knows about specific attempts by the NSA to undermine the communications standards we all rely on: please blow the whistle on this kind of activity. Alert a friend, a colleague, or a journalist. Pervasive surveillance is an attack on all of us, and those who resist it are heroes.

18 December 2013

Daniel Kahn Gillmor: automatically have uscan check signatures

If you maintain software in debian, one of your regular maintenance tasks is checking for new upstream versions, reviewing them, and preparing them for debian if appropriate. One of those steps is often to verify the cryptographic signature on the upstream source archive. At the moment, most maintainers do the cryptographic check manually, or maybe even don't bother to do it at all. For the common case of detached OpenPGP signatures, though, uscan can now do it for you automatically (as of devscripts version 2.13.3). You just need to tell uscan what keys you expect upstream to be signing with, and how to find the detached signature.

So, for example, Damien Miller recently announced his new key that he will be using to sign OpenSSH releases (his new key has OpenPGP fingerprint 59C2 118E D206 D927 E667 EBE3 D3E5 F56B 6D92 0D30 -- you can verify it has been cross-signed by his older key, and his older key has been revoked with the indication that it was superseded by this one). Having done a reasonable verification of Damien's key, if i were the openssh package maintainer, i'd do the following:
cd ~/src/openssh/
gpg --export '59C2 118E D206 D927 E667  EBE3 D3E5 F56B 6D92 0D30' >> debian/upstream-signing-key.pgp
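As a side note, the "reasonable verification" mentioned above could include checking the cross-certification directly. Assuming both of Damien's keys are already in your keyring, something like this shows the signatures made over the new key (including any from the old one):
gpg --check-sigs '59C2 118E D206 D927 E667  EBE3 D3E5 F56B 6D92 0D30'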
And then upon noticing that the signature files are named with a simple .asc suffix on the upstream distribution site, we can use the following pgpsigurlmangle option in debian/watch:
version=3
opts=pgpsigurlmangle=s/$/.asc/ ftp://ftp.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-(.*)\.tar\.gz 
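With the key and the watch rule in place, uscan takes care of the rest. To check that everything is wired up, something like the following should work (--report asks uscan to only report what it finds without downloading; drop it to actually fetch the tarball and its .asc signature and verify them against debian/upstream-signing-key.pgp):
cd ~/src/openssh/
uscan --report --verbose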
I've filed this specific example as debian bug #732441. If you notice a package with upstream signatures that aren't currently being checked by uscan (or if you are upstream, you sign your packages, and you want your debian maintainer to verify them), you can file similar bugs. Or, if you maintain a package for debian, you can just fix up your package so that this check is there on the next upload. If you maintain a package whose upstream doesn't sign their releases, ask them why not -- wouldn't upstream prefer that their downstream users can verify that each release wasn't tampered with?

Of course, none of these checks take the place of the real work of a debian package maintainer: reviewing the code and the changelogs, thinking about what changes have happened, and how they fit into the broader distribution. But it helps to automate one of the basic safeguards we should all be using. Let's eliminate the possibility that the file was tampered with at the upstream distribution mirror or while in transit over the network. That way, the maintainer's time and energy can be spent where they're more needed.

Tags: crypto, devscripts, openpgp, package maintenance, signatures, uscan

13 December 2013

Daniel Kahn Gillmor: OpenPGP Key IDs are not useful

Fingerprints and Key IDs

OpenPGP v4 fingerprints are made from an SHA-1 digest over the key's public key material, creation date, and some boilerplate. SHA-1 digests are 160 bits in length. The "long key ID" of a key is the last 64 bits of the key's fingerprint. The "short key ID" of a key is the last 32 bits of the key's fingerprint. For example, a key with fingerprint 59C2 118E D206 D927 E667 EBE3 D3E5 F56B 6D92 0D30 has the long key ID D3E5F56B6D920D30 and the short key ID 6D920D30. You can see both of the key IDs as a hash in and of themselves, as "32-bit truncated SHA-1" is a sort of hash (albeit not a cryptographically secure one). I'm arguing here that short Key IDs and long Key IDs are actually useless, and we should stop using them entirely where we can do so. We certainly should not be exposing normal human users to them.

(Note that I am not arguing that OpenPGP v4 fingerprints themselves are cryptographically insecure. I do not believe that there are any serious cryptographic risks currently associated with OpenPGP v4 fingerprints. This post is about Key IDs specifically, not fingerprints.)

Key IDs have serious problems

Asheesh pointed out two years ago that OpenPGP short key IDs are bad because they are trivial to replicate. This is called a preimage attack against the short key ID (which is just a truncated fingerprint). Today, David Leon Gil demonstrated that a collision attack against the long key ID is also trivial. A collision attack differs from a preimage attack in that the attacker gets to generate two different things that both have the same digest. Collision attacks are easier than preimage attacks because of the birthday paradox. dlg's colliding keys are not a surprise, but hopefully the explicit demonstration can serve as a wakeup call to help us improve our infrastructure.

So this is not a way to spoof a specific target's long key ID on its own. But it indicates that it's more of a worry than most people tend to think about or plan for. And remember that for a search space as small as 64 bits (the long key ID), if you want to find a preimage against any one of 2^k keys, your search is actually only in a (64-k)-bit space to find a single preimage.

The particularly bad news: gpg doesn't cope well with two keys that have the same long key ID:
0 dkg@alice:~$ gpg --import x
gpg: key B8EBE1AF: public key "9E669861368BCA0BE42DAF7DDDA252EBB8EBE1AF" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
0 dkg@alice:~$ gpg --import y
gpg: key B8EBE1AF: doesn't match our copy
gpg: Total number processed: 1
2 dkg@alice:~$ 
This probably means that caff (from the signing-party package) will also choke when trying to deal with these two keys. I'm sure there are other OpenPGP-related tools that will fail in the face of two keys with matching 64-bit key IDs.

We should not use Key IDs

I am more convinced than ever that key IDs (both short and long) are actively problematic to real-world use of OpenPGP. We want two things from a key management framework: unforgeability, and human-intelligible handles. Key IDs fail at both. So reasonable tools should not expose either short or long key IDs to users, or use them internally if they can avoid them. They do not have any properties we want, and in the worst case, they actively mislead people or lead them into harm. What reasonable tool should do that?

How to replace Key IDs

If we're not going to use Key IDs, what should we do instead? For anything human-facing, we should be using human-intelligible things like user IDs and creation dates. These are trivial to forge, but people can relate to them. This is better than offering the user something that is also trivial to forge, but that people cannot relate to. The job of any key management UI should be to interpret the cryptographic assurances provided by the certifications and present that to the user in a comprehensible way.

For anything not human-facing (e.g. key management data storage, etc.), we should be using the full key itself. We'll also want to store the full fingerprint as an index, since that is used for communication and key exchange (e.g. on calling cards). There remain parts of the spec (e.g. PK-ESK, Issuer subpackets) that make some use of the long key ID in ways that provide some measure of convenience but no real cryptographic security. We should fix the spec to stop using those, and either remove them entirely, or replace them with the full fingerprints. These fixes are not as urgent as the user-facing changes or the critical internal indexing fixes, though.

Key IDs are not useful. We should stop using them.

Tags: collision, crypto, gpg, openpgp, pgp, security

5 December 2013

Daniel Kahn Gillmor: The legal utility of deniability in secure chat

This Monday, I attended a workshop on Multi-party Off the Record Messaging and Deniability hosted by the Calyx Institute. The discussion was a combination of legal and technical people, looking at how the characteristics of this particular technology affect (or do not affect) the law. This is a report-back, since I know other people wanted to attend. I'm not a lawyer, but I develop software to improve communications security, I care about these questions, and I want other people to be aware of the discussion. I hope I did not misrepresent anything below. I'd be happy if anyone wants to offer corrections.

Background

Off the Record Messaging (OTR) is a way to secure instant messaging (e.g. jabber/XMPP, gChat, AIM). The two most common characteristics people want from a secure instant messaging program are:
Authentication
Each participant should be able to know specifically who the other parties are on the chat.
Confidentiality
The content of the messages should only be intelligible to the parties involved with the chat; it should appear opaque or encrypted to anyone else listening in. Note that confidentiality effectively depends on authentication -- if you don't know who you're talking to, you can't make sensible assertions about confidentiality.
As with many other modern networked encryption schemes, OTR relies on each user maintaining a long-lived "secret key", and publishing a corresponding "public key" for their peers to examine. These keys are critical for providing authentication (and by extension, for confidentiality). But OTR offers several interesting characteristics beyond the common two. Its most commonly cited characteristics are "forward secrecy" and "deniability".
Forward secrecy
Assuming the parties communicating are operating in good faith, forward secrecy offers protection against a special kind of adversary: one who logs the encrypted chat, and subsequently steals either party's long-term secret key. Without forward secrecy, such an adversary would be able to discover the content of the messages, violating the confidentiality characteristic. With forward secrecy, this adversary is stymied and the messages remain confidential.
Deniability
Deniability only comes into play when one of the parties is no longer operating in good faith (e.g. their computer is compromised, or they are collaborating with an adversary). In this context, if Alice is chatting with Bob, she does not want Bob to be able to cryptographically prove to anyone else that she made any of the specific statements in the conversation. This is the focus of Monday's discussion. To be clear, this kind of deniability means Alice can correctly say "you have no cryptographic proof I said X", but it does not let her assert "here is cryptographic proof that I did not say X" (I can't think of any protocol that offers the latter assertion). The opposite of deniability is a cryptographic proof of origin, which usually runs something like "only someone with access to Alice's secret key could have said X."
The traditional two-party OTR protocol has offered both forward secrecy and deniability for years. But deniability in particular is a challenging characteristic to provide for group chat, which is the domain of Multi-Party OTR (mpOTR). You can read some past discussion about the challenges of deniability in mpOTR (and why it's harder when there are more than two people chatting) from the otr-users mailing list.

If you're not doing anything wrong...

The discussion was well-anchored by a comment from another participant who cheekily asked "If you're not doing anything wrong, why do you need to hide your chat at all, let alone be able to deny it?" The general sense of the room was that we'd all heard this question many times, from many people. There are lots of problems with the ideas behind the question from many perspectives. But just from a legal perspective, there are at least two problems with the way this question is posed. In these situations, people confront real risk from the law. If we care about these people, we need to figure out if we can build systems to help them reduce that legal risk (of course we also need to fix broken laws, and the legal environment in general, but those approaches were out of scope for this discussion).

The Legal Utility of Deniability

Monday's meeting was called specifically because it wasn't clear how much real-world usefulness there is in the "deniability" characteristic, and whether this feature is worth the development effort and implementation tradeoffs required. In particular, the group was interested in deniability's utility in legal contexts; many (most?) people in the room were lawyers, and it's also not clear that deniability has much utility outside of a formal legal setting. If your adversary isn't constrained by some rule of law, they probably won't care at all whether there is a cryptographic proof or not that you wrote a particular message (in retrospect, one possible exception is exposure in the media, but we did not discuss that scenario).

Places of possible usefulness

So where might deniability come in handy during civil litigation or a criminal trial? Presumably the circumstance is that a piece of a chat log is offered as incriminating evidence, and the defendant is trying to deny something that they appear to have said in the log. This denial could take place in two rather different contexts: during arguments over the admissibility of evidence, or (once admitted) in front of a jury.

In legal wrangling over admissibility, apparently a lot of horse-trading can go on -- each side concedes some things in exchange for the other side conceding other things. It appears that cryptographic proof of origin (that is, a lack of deniability) on the chat logs themselves might reduce the amount of leverage a defense lawyer can get from conceding or arguing strongly over that piece of evidence. For example, if the chain of custody of a chat transcript is fuzzy (i.e. the transcript could have been mishandled or modified somehow before reaching trial), then a cryptographic proof of origin would make it much harder for the defense to contest the chat transcript on the grounds of tampering. Deniability would give the defense more bargaining power.

In arguing about already-admitted evidence before a jury, deniability in this sense seems like a job for expert witnesses, who would need to convince the jury of their interpretation of the data.
There was a lot of skepticism in the room over this, both around the possibility of most jurors really understanding what OTR's claim of deniability actually means, and on jurors' ability to distinguish this argument from a bogus argument presented by an opposing expert witness who is willing to lie about the nature of the protocol (or who misunderstands it and passes on their misunderstanding to the jury).

The complexity of the tech systems involved in a data-heavy prosecution or civil litigation is itself an opportunity for lawyers to argue (and experts to weigh in) on the general reliability of these systems. Sifting through the quantities of data available and ensuring that the appropriate evidence is actually findable, relevant, and suitably preserved for the jury's inspection is a hard and complicated job, with room for error. OTR's deniability might be one more element in a multi-pronged attack on these data systems.

These are the most compelling arguments for the legal utility of deniability that I took away from the discussion. I confess that they don't seem particularly strong to me, though some level of "avoiding a weaker position when horse-trading" resonates with me. What about the arguments against its utility?

Limitations

The most basic argument against OTR's deniability is that courts don't care about cryptographic proof for digital evidence. People are convicted or lose civil cases based on unsigned electronic communications (e.g. normal e-mail, plain chat logs) all the time. OTR's deniability doesn't provide any legal cover stronger than trying to claim you didn't write a given e-mail that appears to have originated from your account. As someone who understands the forgeability of e-mail, i find this overall situation troubling, but it seems to be where we are.

Worse, OTR's deniability doesn't cover whether you had a conversation, just what you said in that conversation. That is, Bob can still cryptographically prove to an adversary (or before a judge or jury) that he had a communication with someone controlling Alice's secret key (which is probably Alice); he just can't prove that Alice herself said any particular part of the conversation he produces.

Additionally, there are runtime tradeoffs depending on how the protocol manages to achieve these features. For example, forward secrecy itself requires an additional round trip or two when compared to authenticated, encrypted communications without forward secrecy (a "round trip" is a message from Alice to Bob followed by a message back from Bob to Alice). Getting proper deniability into the mpOTR spec might incur extra latency (imagine having to wait 60 seconds after everyone joins before starting a group chat, or a pause in the chat of 15 seconds when a new member joins), or extra computational power (meaning that it might not work well on slower/older devices), or an order of magnitude more bandwidth (meaning that chat might not work at all on a weak connection). There could also simply be complexity that makes it harder to correctly implement a protocol with deniability than an alternate protocol without deniability. Incorrectly-implemented software can put its users at risk. I don't know enough about the current state of mpOTR to know what the specific tradeoffs are for the deniability feature, but it's clear there will be some. Who decides whether the tradeoffs are worth the feature?
Other kinds of deniability

Further weakening the case for the legal utility of OTR's deniability, there seem to be other ways to get deniability in a legal context over a chat transcript. There are deniability arguments that can be made from outside the protocol. For example, you can always claim someone else took control of your computer while you were asleep or using the bathroom or eating dinner, or you can claim that your computer had a virus that exported your secret key and it must have been used by someone else. If you're desperate enough to sacrifice your digital identity, you could arrange to have your secret key published, at which point anyone can make signed statements with it. Having forward secrecy makes it possible to expose your secret key without exposing the content of your past communications to any listener who happened to log them.

Conclusion

My takeaway from the discussion is that the legal utility of OTR's deniability is non-zero, but quite low; and that development energy focused on deniability is probably only justified if there are very few costs associated with it. Several folks pointed out that most communications-security tools are too complicated or inconvenient for normal people to use. If we have limited development energy to spend on securing instant messaging, usability and ubiquity would be a better focus than this form of deniability. Secure chat systems that take too long to make, that are too complex, or that are too cumbersome are not going to be adopted. But this doesn't mean people won't chat at all -- they'll just use cleartext chat, or maybe they'll use supposedly "secure" protocols with even worse properties: for example, without proper end-to-end authentication (permitting spoofing or impersonation by the server operator or potentially by anyone else); with encryption that is reversible by the chatroom operator or flawed enough to be reversed by any listener with a powerful computer; without forward secrecy; and so on. As a demonstration of this, we heard some lawyers in the room admit to using Skype to talk with their clients even though they know it's not a safe communications channel, because their clients' adversaries might have access to the Skype messaging system itself.

My conclusion from the meeting is that there are a few particular situations where deniability could be useful legally, but that overall, it is not where we as a community should be spending our development energy. Perhaps in some future world where all communications are already authenticated, encrypted, and forward-secret by default, we can look into improving our protocols to provide this characteristic, but for now, we really need to work on usability, popularization, and wide deployment.

Thanks

Many thanks to Nick Merrill for organizing the discussion, to Shayana Kadidal and Stanley Cohen for providing a wealth of legal insight and legal experience, to Tom Ritter for an excellent presentation of the technical details, and to everyone in the group who participated in the interesting and lively discussion.

Tags: chat, deniability, otr, security

30 October 2013

Daniel Kahn Gillmor: getting to TLS (STARTTLS HOWTO)

Many protocols today allow you to upgrade to TLS from within a cleartext version of the protocol. This often falls under the rubric of "STARTTLS", though different protocols have different ways of doing it. I often forget the exact steps, and when i'm debugging a TLS connection (e.g. with tools like gnutls-cli) i need to poke a remote peer into being ready for a TLS handshake. So i'm noting the different mechanisms here.

Lines starting with C: are from the client; lines starting with S: are from the server. Many of these handshakes are (roughly) built into openssl s_client, using the -starttls option. Sometimes this doesn't work because the handshake needs tuning for a given server; other times you want to do this with a different TLS library. To use the techniques below with gnutls-cli from the gnutls-bin package, just provide the --starttls argument (and the appropriate --port XXX argument), and then hit Ctrl+D when you think it's ok to start the TLS negotiation.

SMTP

The polite SMTP handshake (on port 25 or port 587) that negotiates a TLS upgrade looks like:
C: EHLO myhostname.example
S: [...]
S: 250-STARTTLS
S: [...]
S: 250 [somefeature]
C: STARTTLS
S: 220 2.0.0 Ready to start TLS
<Client can begin TLS handshake>
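To let a tool drive this exchange instead (mail.example.net is a placeholder here), either of the following should work; with the gnutls-cli variant you type the EHLO and STARTTLS lines yourself and hit Ctrl+D after the server's 220 response:
openssl s_client -connect mail.example.net:587 -starttls smtp
gnutls-cli --starttls --port 587 mail.example.net
s_client also understands -starttls pop3, imap, and xmpp for the corresponding protocols below.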
IMAP

The polite IMAP handshake (on port 143) that negotiates a TLS upgrade looks like:
S: OK [CAPABILITY IMAP4rev1 [...] STARTTLS [...]] [...]
C: A STARTTLS
S: A OK Begin TLS negotiation now
<Client can begin TLS handshake>
POP

The polite POP handshake (on port 110) that negotiates a TLS upgrade looks like:
S: +OK POP3 ready
C: STLS
S: +OK Begin TLS 
<Client can begin TLS handshake>
XMPP

The polite XMPP handshake (on port 5222 for client-to-server, or port 5269 for server-to-server) that negotiates a TLS upgrade looks something like (note that the domain requested needs to be the right one):
C: <?xml version="1.0"?><stream:stream to="example.net"
C:  xmlns="jabber:client" xmlns:stream="http://etherx.jabber.org/streams" version="1.0">
S: <?xml version='1.0'?>
S: <stream:stream
S:  xmlns:db='jabber:server:dialback'
S:  xmlns:stream='http://etherx.jabber.org/streams'
S:  version='1.0'
S:  from='example.net'
S:  id='d34edc7c-22bd-44b3-9dba-8162da5b5e72'
S:  xml:lang='en'
S:  xmlns='jabber:server'>
S: <stream:features>
S: <dialback xmlns='urn:xmpp:features:dialback'/>
S: <starttls xmlns='urn:ietf:params:xml:ns:xmpp-tls'/>
S: </stream:features>
C: <starttls xmlns="urn:ietf:params:xml:ns:xmpp-tls" id="1"/>
S: <proceed xmlns='urn:ietf:params:xml:ns:xmpp-tls'/>
<Client can begin TLS handshake>
NNTP

RogerBW (in the comments below) points out that NNTP has TLS support:
C: CAPABILITIES
S: [...]
S: STARTTLS
S: [...]
S: .
C: STARTTLS
S: 382 Continue with TLS negotiation
PostgreSQL

I got mail from James Cloos suggesting how to negotiate an upgrade to TLS with the PostgreSQL RDBMS. He points to the protocol docs, and in particular to the protocol flow documents and the SSLRequest and StartupMessage chunks of the protocol spec (with the clarification that data is sent in network byte order). It won't work in a text-mode communication, but it's worth noting here anyway. The client starts by sending these eight octets:
0x00 0x00 0x00 0x08 0x04 0xD2 0x16 0x2F
and the server replies with 'S' for secure or 'N' for not. If the reply is S, TLS negotiation follows. The message represents int32(8), specifying that there are 8 octets, followed by int16(1234),int16(5678), all sent in network byte order. (The non-TLS case starts with a similar message with int16(3),int16(0) for protocol version 3.0. STARTTLS is essentially pg protocol version 1234.5678.)

What else?

There are other protocols whose TLS upgrades i don't know (but would like to) how to do. If you know other mechanisms, or see bugs with the simple handshakes i've posted above, please let me know either by e-mail or in the comments here.

Other interesting notes: RFC 2817, a not-widely-supported mechanism for upgrading to TLS in the middle of a normal HTTP session.

Tags: gnutls, imap, pop, postgresql, smtp, starttls, tls, xmpp

8 October 2013

Daniel Kahn Gillmor: Unaccountable surveillance is wrong

As I mentioned earlier, the information in the documents released by Edward Snowden shows a clear pattern of corporate and government abuse of the information networks that are now deeply intertwined with the lives of many people all over the world. Surveillance is a power dynamic where the party doing the spying has power over the party being surveilled. The surveillance state that results when one party has "Global Cryptologic Dominance" is a seriously bad outcome. The old saw goes "power corrupts, and absolute power corrupts absolutely". In this case, the stated goal of my government appears to be absolute power in this domain, with no constraint on the inevitable corruption. If you are a supporter of any sort of a just social contract (e.g. the International Principles on the Application of Human Rights to Communications Surveillance), the situation should be deeply disturbing.

One of the major sub-threads in this discussion is how the NSA and their allies have actively tampered with and weakened the cryptographic infrastructure that everyone relies on for authenticated and confidential communications on the 'net. This kind of malicious work puts everyone's communication at risk, not only those people who the NSA counts among their "targets" (and the NSA's "target" selection methods are themselves fraught with serious problems).

The US government is supposed to take pride in the checks and balances that keep absolute power out of any one particular branch. One of the latest attempts to simulate "checks and balances" was the President's creation of a "Review Group" to oversee the current malefactors. The review group then asked for public comment. A group of technologists (including myself) submitted a comment demanding that the review group provide concrete technical details to independent technologists. Without knowing the specifics of how the various surveillance mechanisms operate, the public in general can't make informed assessments about what they should consider to be personally safe. And lack of detailed technical knowledge also makes it much harder to mount an effective political or legal opposition to the global surveillance state (e.g. consider the terrible Clapper v. Amnesty International decision, where plaintiffs were denied standing to sue the Director of National Intelligence because they could not demonstrate that they were being surveilled).

It's also worth noting that the advocates for global surveillance do not themselves want to be surveilled, and that (for example) the NSA has tried to obscure as much of their operations as possible, by over-classifying documents and making spurious claims of "national security". This is where the surveillance power dynamic is most baldly in play, and many parts of the US government intelligence and military apparatus have a long history of acting in bad faith to obscure their activities.

The people who have been operating these surveillance systems should be ashamed of their work, and those who have been overseeing the operation of these systems should be ashamed of themselves. We need to better understand the scope of the damage done to our global infrastructure so we can repair it if we have any hope of avoiding a complete surveillance state in the future. Getting the technical details of these compromises in the hands of the public is one step on the path toward a healthier society.
Postscript

Lest I be accused of optimism, let me make clear that fixing the technical harms is necessary, but not sufficient; even if our technical infrastructure had not been deliberately damaged, or if we manage to repair it and stop people from damaging it again, far too many people still regularly accept ubiquitous private (corporate) surveillance. Private surveillance organizations (like Facebook and Google) are too often in a position where their business interests are at odds with their users' interests, and powerful adversaries can use a surveillance organization as a lever against weaker parties. But helping people to improve their own data sovereignty and to avoid subjecting their friends and allies to private surveillance is a discussion for a separate post, i think.

Tags: cryptography, nsa

28 September 2013

Daniel Kahn Gillmor: RIP Cookiepuss

Yesterday, i said a sad goodbye to an old friend at ABC No Rio. Cookiepuss was a steadfast companion in my volunteer shifts at the No Rio computer center, a cranky yet gregarious presence. I met her soon after moving to New York, and have hung out with her nearly every week for years. [Cookiepuss -- No Dogs No Masters] She had the run of the building at ABC No Rio, and was friends with all sorts of people. She was known and loved by punks and fine artists, by experimental musicians and bike mechanics, computer geeks and librarians, travelers and homebodies, photographers, screenprinters, anarchists, community organizers, zinesters, activists, performers, and weirdos of all stripes. For years, she received postcards from all over the world, including several from people who had never even met her in person. In her younger days, she was a ferocious mouser, and even as she shrank with age and lost some of her teeth she remained excited about food. She was an inveterate complainer; a pants-shredder; a cat remarkably comfortable with dirt; a welcoming presence to newcomers and a friendly old curmudgeon who never seemed to really hold a grudge even when i had to do horrible things like help her trim her nails. After a long life, she died having said her goodbyes, and surrounded by people who loved her. I couldn't have asked for better, but I miss her fiercely.

25 September 2013

Daniel Kahn Gillmor: half a minute for science!

A friend is teaching a class on data analysis. She is building a simple and rough data set for the class to examine, and to spur discussion. You can contribute in half a minute! Here's how:
  1. get a stopwatch or other sort of timer (whatever device you're reading this on probably has such a thing).
  2. start the timer, but don't look at it.
  3. wait for what you think is 30 seconds, and then look at the timer
  4. how many actual seconds elapsed?
The data doesn't need to be particularly high-precision (whole second values are fine). The other data points my friend is looking for are age (in years; again, whole numbers are fine) and gender. You can send me your results by e-mail (i suspect you can find my address if you're reading this blog). Please put "half a minute for science" in the subject line, and make sure you include: the number of seconds that actually elapsed, your age, and your gender. Science thanks you!

Tags: science

10 September 2013

Daniel Kahn Gillmor: Support privacy-respecting network services!

Support privacy-respecting network services! Donate to Riseup.net!

There's a lot of news recently about some downright orwellian surveillance executed across the globe by my own government with the assistance of major American corporations. The scope is huge, and the implications are depressing. It's scary and frustrating for anyone who cares about civil society, freedom of speech, cultural autonomy, or data sovereignty. As bad as the situation is, though, there are groups like Riseup and May First/People Link who actively resist the data dragnet.

The good birds at Riseup have been tireless advocates for information autonomy for people and groups working for liberatory social change for years. They have provided (and continue to provide) impressive, well-administered infrastructure using free software to help further these goals, and they have a strong political commitment to making a better world for all of us and to resisting strongarm attempts to turn over sensitive data. And they provide all this expertise and infrastructure and support on a crazy shoestring of a budget.

So if the news has got you down, or frustrated, or upset, and you want to do something to help improve the situation, you could do a lot worse than sending some much-needed funds to help Riseup maintain an expanding infrastructure. This fundraising campaign will only last a few more days, so give now if you can!

(note: i have worked with some of the riseup birds in the past, and hope to continue to do so in the future. I consider it critically important to have them as active allies in our collective work toward a better world, which is why i'm doing the unusual thing of asking for donations for them on my blog.)

20 May 2013

Daniel Kahn Gillmor: gpg --ask-cert-level considered harmful

Occasionally, someone asks me whether we should encourage use of the --ask-cert-level option when certifying OpenPGP keys with gpg. I see no good reason to use this option, and i think we should discourage people from trying to use it. I don't think there is a satisfactory answer to the question "how will specifying the level of identity certification concretely benefit anyone involved?", and i don't see why we should want one. gpg gets it absolutely right by not asking users this question by default. People should not be enabling this option. Some background: gpg's --ask-cert-level option allows the user who is making an OpenPGP identity certification to indicate just how sure they are of the identity they are certifying. The user's choice is then mapped into four levels of OpenPGP certification of a User ID and Public-Key packet, which i'll refer to by their signature type identifiers in the OpenPGP spec:
0x10: Generic certification
The issuer of this certification does not make any particular assertion as to how well the certifier has checked that the owner of the key is in fact the person described by the User ID.
0x11: Persona certification
The issuer of this certification has not done any verification of the claim that the owner of this key is the User ID specified.
0x12: Casual certification
The issuer of this certification has done some casual verification of the claim of identity.
0x13: Positive certification
The issuer of this certification has done substantial verification of the claim of identity.
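As an aside, you can see which of these classes an existing certification used in gpg's signature listing (jane@example.org is a placeholder here):
gpg --list-sigs jane@example.org
In the output, a "sig 3" line is a 0x13 (positive) certification, "sig 2" is 0x12, "sig 1" is 0x11, and a bare "sig" is a generic 0x10 certification.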
Most OpenPGP implementations make their "key signatures" as 0x10 certifications. Some implementations can issue 0x11-0x13 certifications, but few differentiate between the types.
By default (if --ask-cert-level is not supplied), gpg issues certifications ("signs keys") using 0x10 (generic) certifications, with the exception of self-sigs, which are made as type 0x13 (positive). When interpreting certifications, gpg does distinguish between different certifications in one particular way: 0x11 (persona) certifications are ignored; other certifications are not. (Users can change this cutoff with the --min-cert-level option, but it's not clear why they would want to do so.) So there is no functional gain in declaring the difference between a "generic" certification and a "positive" one, even if there were a well-defined standard by which to assess the difference between the "generic" and "casual" or "positive" levels; and if you're going to make a "persona" certification, you might as well not make one at all.

And it gets worse: the problem is not just that such an indication is functionally useless; encouraging people to make these kinds of assertions actively encourages leaks of a more-detailed social graph than just encouraging everyone to use the default blanket 0x13-for-self-sigs, 0x10-for-everyone-else policy. A richer public social graph means more data that can feed the ravenous and growing appetite of the advertising-and-surveillance regimes. i find these regimes troubling. I admit that people often leak much more information than this indication of "how well do you know X" via tools like Facebook, but that's no excuse to encourage them to leak still more, or to acclimatize people to the idea that the details of their personal relationships should by default be public knowledge.

Lastly, the more we keep the OpenPGP network of identity certifications (a.k.a. the "web of trust") simple, the easier it is to make sensible and comprehensible and predictable inferences from the network about whether a key really does belong to a given user. Minimizing the complexity and difficulty of deciding to make a certification helps people streamline their signing processes and reduces the amount of cognitive overhead people spend just building the network in the first place.

Tags: openpgp

15 May 2013

Daniel Kahn Gillmor: OpenPGP User ID Comments considered harmful

Most OpenPGP User IDs look like this:
Jane Q. Public <jane@example.org>
This is clean, clear, and unambiguous. However, some tools (gpg, enigmail among others) ask the user to provide a "Comment:" field when they are choosing a new User ID (e.g. when making a new key). These UI prompts are evil. The savvy user knows to avoid entering anything in this field, so that they can end up with a User ID like the one above. The user who provides something here (perhaps even something inconsequential like "I like strawberries", due to not being sure what should go in this little box) will instead end up with a User ID like:
Jane Q. Public (I like strawberries) <jane@example.org>
This is bad. This means that Jane is asking the people who certify her key+userid to certify whether she actually likes strawberries (how could they know? what if she changes her mind? should they revoke their certifications?) and anywhere that she is referred to by name will include this mention of strawberries. This is not Jane's identity, and it doesn't belong in an OpenPGP User ID packet. Furthermore, since User IDs are atomic, if Jane wants to change the comment field (but leave her name and e-mail address the same), she will instead need to create a new User ID, publish it, get everyone who has certified her old key+userid to certify the key+newuserid, and then revoke the old one.

It is difficult already to help people understand and participate in the certification network that forms the backbone of OpenPGP's so-called "web of trust". These bogus comment fields make an already-difficult task harder. And all because of strawberries!

Tools like enigmail and gpg should not expose the "Comment:" field to users who are generating keys or choosing new User IDs. If they feel it absolutely must be present for some weird corner case that 0.1% of their users will have, they could require that the user enters some sort of "expert mode" before prompting the user to do something that is likely to be a mistake. There is almost no legitimate reason for anyone to use this field. Let's go through some examples of the comments people use, taken from some keys i have lying around (identifying marks have been changed to protect the innocent who were duped by this bad UI choice, but you can probably find similar ones on the public keyserver network if you want to hunt around):
domain repetition
John Q. Public (Debian) <johnqpublic@debian.org>
We know you're with debian already from the @debian.org address. If this is in contrast to your other address (johnqpublic@example.org) so that people know where to send you debian-related e-mail, this is still not necessary. Lest you think i'm just calling out debian developers, people with @ubuntu.com addresses and (Ubuntu) comments (as well as @example.edu addresses and (Example University) comments and @example.com addresses and (Example Corp) comments) are out there too.
nicknames already evident
John Q. Public (Johnny) <johnqpublic@example.net>
John Q. Public (wackydude) <wackydude@example.net>
Again, the information these comments are providing offers no clear disambiguation from the info already contained in the name and e-mail address, and just muddies the water about what the people who certify this identity should actually be trying to verify before they make their certification.
"Work"
John Q. Public (Work) <johnqpublic@example.com>
if John's correspondents know that he works for Example Corp, then "Work" isn't helpful to them, because they already know this as the address that they're writing to him with. If they don't know that, then they probably aren't writing to him at work, so they don't need this comment either. The same problem appears (for example) with literal comments of (School) next to their @example.edu address.
This is my nth try at this crazy system!
John Q. Public (This is my second key) <johnqpublic@example.com>
John Q. Public (This is my primary key) <johnqpublic@example.com>
John Q. Public (No wait really use this one) <johnqpublic@example.com>
OpenPGP is confusing, and it can be tricky to get it right. We all know :) This is still not part of John's identity. If you want to designate a key as your preferred key, keep it up-to-date, get people to certify it, and revoke or expire your old keys. People who care can look at the timestamps on your keys and tell which ones are the most recent ones. You do have a revocation certificate for your key handy just in case you lose it, right?
Don't use this key
John Q. Public (Old key, do not use) <johnqpublic@example.com>
John Q. Public (Please only use this through September 2004) <johnqpublic@example.com>
This kind of sentiment is better expressed by revoking the key in question or setting an expiration time on the key or User ID self-sig directly. This sentiment is not part of John's identity, and shouldn't be included as though it were.
"none"
John Q. Public (none) <johnqpublic@example.com>
sigh. This is clearly someone getting mixed up by the UI.
I use strong crypto!
John Q. Public (3092 bits of RSA) <johnqpublic@example.com>
This comment refers to the strength of the key material, or the algorithms preferred by the user. Since the User ID is associated with the key material already, people who care about this information can get it from the key directly. This is also not part of the user's actual identity.
"no comment"
John Q. Public (no comment) <johnqpublic@example.com>
This is actually not uncommon (some keyservers reply "too many matches!"). It shows that the user is witty and can think on their feet (at least once), but it is still not part of the user's identity.
But wait (i hear you say)! I have a special case that actually is a legitimate use of the comment field that cannot be expressed in OpenPGP in any other way! I'm sure that such cases exist. I've even seen one or two of them. The fact that one or two cases exist does not excuse the fact that the overwhelming majority of these comments in OpenPGP User IDs are a mistake, caused only by bad UI design that prompts people to put something (anything!) in the empty box (or on the command prompt, depending on your preference). And this mistake is one of the thousand papercuts that inhibits the robust growth of the OpenPGP certification network that some people call the "web of trust". Let's avoid them so we can focus on the other 999 papercuts.

Please don't use comments in your OpenPGP User ID. And if you make a user interface for OpenPGP that prompts the user to decide on a new User ID, please don't include a prompt for "Comment" unless the user has already certified that they are really and truly a special special snowflake. Thanks!

Tags: openpgp, ui

3 May 2013

Daniel Kahn Gillmor: It's Advertising all the way down

Today i saw a billboard on the side of a bus. It was from a cable TV channel, bragging about how well-connected their viewers are (presumably on the internet, social media, blogs, etc). It shows a smiling, attractive man, with text next to him saying something like "I told 9000 people what smartphone to buy". What happened here? And almost all of these steps count as positive economic activity when we try to measure whether the US economy is healthy. I am depressed by this tremendous waste of time and effort.

Tags: advertising

8 March 2013

Vasudev Kamath: Multi-Arch lintian check for font packages

I've provided a new patch (#701061) for lintian to warn about font packages that are not marked as Multi-Arch foreign or allowed. It's already been included in lintian by Niels Thykier and will be part of version 2.5.12. The following tag has been implemented:
font-package-not-multi-arch-foreign
A bit of history for this implementation: we got a bug report on one of the fonts in the pkg-fonts team that it was not being installed for the i386 architecture on an amd64 multi-arch system (#694864). We were at first confused, but Daniel Kahn Gillmor pointed out that we indeed need to mark all font packages as Multi-Arch: foreign. He proposed that we should write a lintian check for this, which I volunteered to do and then forgot! Recently I was checking my QA page and landed on Ubuntu's page for my package, where I saw they were patching an imported font package to mark it as Multi-Arch: foreign, and I suddenly remembered my promise! This patch was the result of that enlightenment :-). Since there is a huge number of font packages maintained by pkg-fonts-devel, we targeted this for the Jessie release.
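For reference, the fix itself is a one-line addition to the binary package's stanza in debian/control; a sketch (fonts-example is a placeholder name):
Package: fonts-example
Architecture: all
Multi-Arch: foreign
Description: example font package marked as Multi-Arch foreign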
Hereby I request all font package maintainers to consider marking their packages as Multi-Arch: foreign. I also request people to join us on pkg-fonts-devel and help us do this for all font packages maintained by the team; we really lack people in the team.

28 February 2013

Gunnar Wolf: Dkg: Unwrap it with Blender. And ask @octagesimal / @casyopea !

Daniel tells his story of building a woolly mammoth, and throws out some ideas on how this could be implemented easily with free software. But if I read his post correctly, Daniel still misses the precise ways to do it. Our friends Octavio and Claudia (tweeted hereby) have given some Blender courses here at our classroom at home (Guys! Come again! We miss you!), and host the Spanish-speaking g-blender community. At one of their courses, they showed how to model an object/character, and how, in order to color/texture its parts, you can unwrap it. This process yields a flattened image with the surfaces that build your object, which you can then color. Well, you can also use it as a base pattern to cut and sew your plush! It is not meant to be used for this (although it works), so it won't give you the extra tabs to be sewn in place, and the joints might not be at the most comfortable places. But it is a base you can work from.

Daniel Kahn Gillmor: Make a Woolly Mammoth (thanks, inkscape!)

I feel like i've done a lot of blogging recently about failing to do things with proprietary software. That's annoying. This post is about something i made successfully with free software (and some non-software crafting): I made a Woolly Mammoth for my nephew!

I documented the pattern (with pictures!) that i came up with using Inkscape (and used markdown, pandoc, emacs, pdftk, and other free software in the process). i've also published the source for the pattern via git if you want to modify it:
git clone git://lair.fifthhorseman.net/~dkg/woolly
Writing up the documentation makes me realize that i don't know of any software tools designed specifically for facilitating fabric/craft construction, though i can imagine some interesting software ideas. Anyone have any ideas?

Tags: brainstorming, crafting, inkscape

23 February 2013

Tiago Bortoletto Vaz: #DPLgame

In a random disorder: MadameZou (photo by Andrew McMillan, CC-BY-SA 2.0), dkg, moray, h01ger

1 February 2013

Daniel Kahn Gillmor: proprietary software activation fail

i have a colleague who is forced by work situations to use Windows. Somehow, I'm the idiot^W^W^W^W^Wfriendly guy who gets tapped to fix it when things break. Well, this time, the power supply broke. As in, dead, no lights, no fan, no nothing. No problem, though: the disk is still good, and i've got a spare machine lying around; and the spare is actually superior hardware to the old machine, so it'll be an upgrade in addition to a fix. Nice!

So i transplant the disk and fire up the new chassis. But WinXP fails to boot with a lovely "0x0000007b" BSOD. The internet tells me that this might mean it can't find its own disk. OK, pop into the new chassis' BIOS, tell it to run the SATA ports in "legacy IDE" mode, and try again. Now we get a "0x0000007e" BSOD. Some digging on the 'net makes me think it's now complaining about the graphics driver. Hmm. Well, i figure i can probably work around that by installing new drivers from Safe Mode. So i reboot into Safe Mode. Success! It boots to the login screen in Safe Mode. And, mirabile dictu, i happen to know the Administrator password. I put it in, and get a message that this Windows installation isn't "activated" yet -- presumably because the hardware has changed out from under it. And by the way, i'm not allowed to log in via Safe Mode until it's activated. So please reboot to "normal" Windows and activate it first. Except, of course, the whole reason i'm booting into Safe Mode was because normal Windows gives a BSOD. Grrrr. Who thought up this particular lovely catch-22?

OK, change tactics. Scavenging the scrap bin turns up a machine with a failed mainboard, but a power supply with all the right leads. It's rated for about 80W less than the old machine's failed supply, but i figure if i rip out the DVD-burner and the floppy drive maybe it will hold. Oh, and the replacement power supply doesn't physically fit the old chassis, but it hangs halfway out the back and sort of rattles around a bit. I sacrifice the rest of the scrap machine, rip out its power supply, stuff the power supply into the old chassis, swap the original disk back in, and ... it boots successfully, finally. That was the shorter version of the story :P

So now my colleague has a horrible mess of a frankencomputer which is more likely to fail again in the future, instead of a nice shiny upgrade. Why? Because Microsoft's need to control the flow of software takes priority over the needs of their users. This is what you get when you let Marketing and BizDev drive your technical decisions. Do i still need to explain why i prefer free software?

Tags: griping, windows

29 January 2013

Daniel Kahn Gillmor: visualizing MIME structure

Better debugging tools can help us understand what's going on with MIME messages. A python scrap i wrote a couple years ago named printmimestructure has been very useful to me, so i thought i'd share it. It reads a message from stdin, and prints a visualisation of its structure, like this:
0 dkg@alice:~$ printmimestructure < 'Maildir/cur/1269025522.M338697P12023.monkey,S=6459,W=6963:2,Sa' 
 multipart/signed 6546 bytes
  text/plain inline 895 bytes
  application/pgp-signature inline [signature.asc] 836 bytes
0 dkg@alice:~$ 
You can fetch it with git if you like:
git clone git://lair.fifthhorseman.net/~dkg/printmimestructure
It feels silly to treat this ~30 line script as its own project, but i don't know of another simple tool that does this. If you know of one, or of something similar, i'd love to hear about it in the comments (or by sending me e-mail if you prefer). If it's useful for others, I'd be happy to contribute printmimestructure to a project of like-minded tools. Does such a project exist? Or if people think it would be handy to have in debian, i can also package it up, though that feels like it might be overkill.

And oh yeah, as always: bug reports, suggestions, complaints, and patches are welcome :)

Tags: debugging, mime, python

17 January 2013

Russ Allbery: A few last thoughts on Aaron Swartz

Daniel Kahn Gillmor has a very good blog post. You should read it. It includes a thoughtful rebuttal to some of my earlier thoughts about activism. I think I'm developing a richer understanding of where I see boundaries here, but after my last post, I also realized that by focusing on the specific details of what should have been a minor alleged crime, I'm derailing. Swartz did so much else. I made a note to come back to the more theoretical discussion in six months; now isn't the time. Now is the time to celebrate open content and all of the things Swartz achieved. (But thank you very much to the multiple people who have pointed out flaws in my reasoning and attempted approach.)

Hopefully, it's also an opportunity to keep the pressure on for a saner and less abusive judicial system that doesn't threaten people with ridiculous and disproportionate punishment in order to terrify them into unwarranted plea bargains. The petition I mentioned has reached nearly 40,000 signatures and passed the threshold (at the time it was posted) for forcing a White House response. Probably more importantly, it also seems to be creating the feedback cycle that I was hoping to see: the popularity of the petition is causing this story to stay in the news cycle and continue to be written about, which in turn drives more signatures to the petition. I'm not particularly hopeful that the Obama administration cares about the vast and deep problems with our criminal justice system, but I'm somewhat more hopeful that they, like most politicians, hate news cycles that they don't control. The longer this goes on, the stronger the incentive to find some way to make it go away, which could lead to real disciplinary action.

A key committee in the US House of Representatives is starting a formal investigation. One of my local representatives has proposed modifying the US federal law on computer fraud and abuse to remove violations of terms of service from the definition of the crime. (I don't have much hope that this will pass when proposed by the minority party in a fairly hostile House, but the mere act of proposing it keeps the issue in the news.) Glenn Greenwald has a (typically long-winded) round-up of news in the Guardian. Note that both Greenwald and Declan McCullagh link directly to the petition in articles in mainstream news outlets.

One thing that slacktivism can do is perpetuate a news cycle until it gets more uncomfortable for people in power. It's still nowhere near as effective as the types of activism that Swartz was so good at, but in this specific case I think one gets a reasonable return on one's five-minute investment of effort.

I'm going to stop talking about this now, since other people are a lot better at this sort of post than I am. But one last link: the medical community has a related open content problem, and theirs is also killing people. Possibly people you know. If all of this has inspired you, as it has me, to care even more about open content, watch the push for open access to clinical trial data. More background is in Ben Goldacre's TED talk.

15 January 2013

Daniel Kahn Gillmor: in memory of Aaron Swartz

I was upset to learn about Aaron Swartz's death last week. I continue to be upset about his loss, and about our loss. He didn't just show promise of great things to come in the future -- he had already done more work for the public good than many of us will ever do. I'd only met him IRL a couple times, but (like many others) i had encountered him on the 'net in many places. He was a good person, someone i didn't need to always agree with to respect.

I read Russ Allbery's posts about Aaron and "slacktivism" with much appreciation. I had been ambivalent about signing the whitehouse.gov petition asking for the removal of the prosecutor for overreach, because I generally distrust the effectiveness of online petitions (and offline petitions, for that matter). But Russ's analysis convinced me to go ahead and sign it. The petition is concrete, clear (despite wanting a grammatical proofread), and actionable.

For people willing to go beyond petition signing to civil disobedience, The Aaron Swartz Memorial JSTOR Liberator is an option. It makes it straightforward to potentially violate the onerous JSTOR terms of service by re-publishing a public-domain article from JSTOR to archive.org, where it will be accessible to anyone directly.

As someone who builds and maintains information/communications infrastructure, i have very mixed feelings about most online civil disobedience, since it often takes the form of a Distributed Denial of Service (DDoS) attack of some sort. DDoS attacks of public services are notoriously difficult to defend against without having huge resources to throw at the problem, so encouraging participation in a DDoS often feels a little bit like handing out cans of gasoline when you know that everyone is living in a house of straw. However, the JSTOR Liberator is not a DDoS at all -- it's simply a facilitation of people breaking the JSTOR Terms of Service (ToS), some of the same terms that Aaron was facing charges for violating. So it is a well-targeted way to demonstrate that the prosecutions were overreaching.

I wanted to take issue with one of Russ' statements, though. In his second post about the situation, Russ wrote:
Social activism and political disobedience are important and often valuable things, but performing your social activism using other people's stuff is just rude. I think it can be a forgivable rudeness; people can get caught up in the moment and not realize what they're doing. But it's still rude, and it's still not the way to go about civil disobedience.
While i generally agree with Russ' thoughtful consideration of consent, I have to take issue with this elevation of some sort of hyper-extended property right over the moral agency that drives civil disobedience. To use someone else's property for the sake of a just cause without damaging the property or depriving the owner of its use is not "forgivable rudeness" -- it's forgivable, laudable even, because it is just. And the person using the property doesn't need to be "caught up in the moment and not realize what they're doing" for it to be acceptable.

Civil disobedience often involves putting some level of inconvenience or discomfort on other people, including innocent people. It might be the friends and family of the activist who have to deal with the jail time; it might be the drivers stuck in a traffic jam caused by a demonstration; it might be the people forced to shop elsewhere because the store's doors are barricaded by protestors. All of these people could be troubled by the civil disobedience more than MIT's network users and admins were troubled by Aaron's protest, and that doesn't make the protests described worse or "not the way to go about civil disobedience." The trouble highlights a more significant injustice, and in its troubling way does what it can to help right it.

Aaron was a troublemaker, and a good one. He will be missed. Tags: aaronsw
